81.
We study a generalization of the weighted set covering problem where every element needs to be covered multiple times. When no set contains more than two elements, we can solve the problem in polynomial time by solving a corresponding weighted perfect b‐matching problem. In general, we may use a polynomial‐time greedy heuristic similar to the one for the classical weighted set covering problem studied by D.S. Johnson [Approximation algorithms for combinatorial problems, J Comput Syst Sci 9 (1974), 256–278], L. Lovász [On the ratio of optimal integral and fractional covers, Discrete Math 13 (1975), 383–390], and V. Chvátal [A greedy heuristic for the set‐covering problem, Math Oper Res 4(3) (1979), 233–235] to get an approximate solution for the problem. We find a worst‐case bound for the heuristic similar to that for the classical problem. In addition, we introduce a general type of probability distribution for the population of the problem instances and prove that the greedy heuristic is asymptotically optimal for instances drawn from such a distribution. We also conduct computational studies to compare solutions resulting from running the heuristic and from running the commercial integer programming solver CPLEX on problem instances drawn from a more specific type of distribution. The results clearly exemplify the benefits of using the greedy heuristic when problem instances are large. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2005
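To make the greedy idea concrete, below is a minimal sketch of a greedy rule for weighted set multicover in the spirit of the Johnson/Lovász/Chvátal heuristic: repeatedly pick the set with the lowest cost per unit of still-unmet coverage demand. The data layout, tie-breaking, and the assumption that each set may be chosen at most once are illustrative choices, not the paper's exact procedure.

```python
def greedy_multicover(sets, weights, demand):
    """Greedy heuristic for weighted set multicover (illustrative sketch).

    sets    : list of collections of elements (each set may be picked at most once here)
    weights : weights[i] is the cost of sets[i]
    demand  : dict element -> number of times it must still be covered
    Returns (indices of chosen sets, total weight).
    """
    remaining = dict(demand)           # outstanding coverage requirements
    unused = set(range(len(sets)))     # sets not yet selected
    chosen, total = [], 0.0

    while any(r > 0 for r in remaining.values()):
        best, best_ratio = None, float("inf")
        for i in unused:
            # coverage a set still provides: one unit per element with unmet demand
            gain = sum(1 for e in sets[i] if remaining.get(e, 0) > 0)
            if gain == 0:
                continue
            ratio = weights[i] / gain  # cost per unit of residual coverage
            if ratio < best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            raise ValueError("demand cannot be met by the remaining sets")
        chosen.append(best)
        total += weights[best]
        unused.remove(best)
        for e in sets[best]:
            if remaining.get(e, 0) > 0:
                remaining[e] -= 1
    return chosen, total

# Example: every element must be covered twice.
sets = [{"a", "b"}, {"b", "c"}, {"a", "c"}, {"a", "b", "c"}]
print(greedy_multicover(sets, [1.0, 1.0, 1.0, 2.5], {"a": 2, "b": 2, "c": 2}))
```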
82.
We deal with the problem of minimizing makespan on a single batch processing machine. In this problem, each job has both a processing time and a size (capacity requirement). The batch processing machine can process a number of jobs simultaneously as long as the total size of the jobs being processed does not exceed the machine capacity. The processing time of a batch is the processing time of the longest job in the batch. An approximation algorithm with worst‐case ratio 3/2 is given for the version where the processing times of large jobs (with sizes greater than 1/2) are not less than those of small jobs (with sizes not greater than 1/2). This result is the best possible unless P = NP. For the general case, we propose an approximation algorithm with worst‐case ratio 7/4. A number of heuristics by Uzsoy are also analyzed and compared. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 226–240, 2001
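As an illustration of the kind of batching heuristic analyzed in this line of work (not the 3/2- or 7/4-approximation from the paper), the sketch below implements a simple first-fit decreasing rule: sort jobs by nonincreasing processing time and place each one into the first open batch with enough residual capacity; the makespan is then the sum of batch processing times, each batch running for as long as its longest job. The job representation and the capacity normalization are assumptions.

```python
def ffd_batch_makespan(jobs, capacity=1.0):
    """First-fit-decreasing batching heuristic for a single batch machine (sketch).

    jobs     : list of (processing_time, size) pairs, each size <= capacity
    capacity : machine capacity (sizes are often normalized so capacity = 1)
    Returns (batches, makespan), where each batch is a list of jobs.
    """
    # Longest jobs first, so a batch's processing time is set by its first job.
    ordered = sorted(jobs, key=lambda j: j[0], reverse=True)
    batches, loads = [], []            # jobs per batch, used capacity per batch
    for p, s in ordered:
        for k, load in enumerate(loads):
            if load + s <= capacity:   # first batch with enough residual capacity
                batches[k].append((p, s))
                loads[k] += s
                break
        else:
            batches.append([(p, s)])   # no batch fits: open a new one
            loads.append(s)
    # Batches run sequentially; each runs for as long as its longest job.
    makespan = sum(max(p for p, _ in b) for b in batches)
    return batches, makespan

# Example with four jobs on a machine of capacity 1.
print(ffd_batch_makespan([(8, 0.6), (5, 0.3), (4, 0.5), (2, 0.2)]))
```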
83.
We consider a container terminal discharging containers from a ship and locating them in the terminal yard. Each container has a number of potential locations in the yard where it can be stored. Containers are moved from the ship to the yard using a fleet of vehicles, each of which can carry one container at a time. The problem is to assign each container to a yard location and dispatch vehicles to the containers so as to minimize the time it takes to unload all the containers from the ship. We show that the problem is NP‐hard and develop a heuristic algorithm based on formulating the problem as an assignment problem. The effectiveness of the heuristic is analyzed from both worst‐case and computational points of view. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 363–385, 2001
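A hedged sketch of the assignment-problem idea: given estimated times for each vehicle to move each waiting container to its best available yard location, solve a rectangular assignment problem to dispatch the current fleet. The single-round dispatch loop, the synthetic cost matrix, and the use of SciPy's Hungarian-style solver are assumptions for illustration; the paper's heuristic is more involved.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def dispatch_round(cost):
    """One dispatch round: assign vehicles (rows) to containers (columns).

    cost[i, j] estimates how long vehicle i would take to move container j
    to its best available yard location.  Returns the (vehicle, container)
    pairs of a minimum-total-time assignment.
    """
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Example: 3 vehicles, 4 containers still on the ship (synthetic time estimates).
cost = np.array([[4.0, 7.0, 3.0, 6.0],
                 [5.0, 2.0, 6.0, 4.0],
                 [6.0, 5.0, 4.0, 3.0]])
print(dispatch_round(cost))
```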
84.
In this paper, we present an O(nm log(U/n)) time maximum flow algorithm. If U = O(n), then this algorithm runs in O(nm) time for all values of m and n. This gives the best available running time to solve maximum flow problems satisfying U = O(n). Furthermore, for unit capacity networks the algorithm runs in O(n^{2/3} m) time. It is a two‐phase capacity scaling algorithm that is easy to implement and does not use complex data structures. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 511–520, 2000
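For orientation, the sketch below implements the classical capacity-scaling scheme that this family of algorithms builds on: augment only along paths whose residual capacities are at least a threshold Δ, then halve Δ. It is a textbook illustration, not the paper's two-phase O(nm log(U/n)) algorithm; the graph representation is an assumption.

```python
from collections import defaultdict, deque

def max_flow_capacity_scaling(n, edges, s, t):
    """Classical capacity-scaling max flow (illustrative sketch).

    n     : number of nodes, labelled 0..n-1
    edges : list of (u, v, capacity) with positive integer capacities
    s, t  : source and sink
    """
    cap = defaultdict(int)             # residual capacities (parallel edges merged)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                  # reverse (residual) arc

    def find_path(delta):
        """BFS for an s-t path using only residual arcs of capacity >= delta."""
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                break
            for v in adj[u]:
                if v not in parent and cap[(u, v)] >= delta:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return None
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        return path

    U = max(c for _, _, c in edges)
    delta = 1
    while delta * 2 <= U:              # largest power of two not exceeding U
        delta *= 2
    flow = 0
    while delta >= 1:                  # scaling phases
        path = find_path(delta)
        while path is not None:
            bottleneck = min(cap[e] for e in path)
            for u, v in path:
                cap[(u, v)] -= bottleneck
                cap[(v, u)] += bottleneck
            flow += bottleneck
            path = find_path(delta)
        delta //= 2
    return flow

# Example: a small network with max flow 5.
print(max_flow_capacity_scaling(4, [(0, 1, 3), (0, 2, 2), (1, 3, 2), (2, 3, 3), (1, 2, 1)], 0, 3))
```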
85.
In this paper, we consider a new weapon–target allocation problem with the objective of minimizing the overall firing cost. The problem is formulated as a nonlinear integer programming model. We applied Lagrangian relaxation and a branch‐and‐bound method to the problem after transforming the nonlinear constraints into linear ones. An efficient primal heuristic is developed to find a feasible solution to the problem to facilitate the procedure. In the branch‐and‐bound method, three different branching rules are considered and the performances are evaluated. Computational results using randomly generated data are presented. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 640–653, 1999
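As a hedged illustration of what a primal heuristic for this kind of model can look like (not the paper's Lagrangian relaxation or branch-and-bound procedure), the sketch below greedily builds a feasible allocation for a simplified model in which each weapon fires at most once, each target j must receive demand[j] weapons, and assigning weapon i to target j costs cost[i][j]. The model structure and field names are assumptions.

```python
def greedy_allocation(cost, demand):
    """Greedy primal heuristic for a simplified weapon-target allocation model.

    cost[i][j] : firing cost of assigning weapon i to target j (assumed model)
    demand[j]  : number of weapons target j must receive (assumed model)
    Each weapon is used at most once.  Returns {weapon: target}, or None if
    the demand cannot be satisfied by this greedy pass.
    """
    n_w, n_t = len(cost), len(demand)
    remaining = list(demand)
    free = set(range(n_w))
    assignment = {}
    # Consider all (weapon, target) pairs from cheapest to most expensive.
    pairs = sorted((cost[i][j], i, j) for i in range(n_w) for j in range(n_t))
    for c, i, j in pairs:
        if i in free and remaining[j] > 0:
            assignment[i] = j
            free.remove(i)
            remaining[j] -= 1
    return assignment if all(r == 0 for r in remaining) else None

# Example: 4 weapons, 2 targets; target 0 needs 2 shots, target 1 needs 1.
print(greedy_allocation([[3, 5], [2, 4], [6, 1], [4, 7]], [2, 1]))
```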
86.
Degradation experiments are widely used to assess the reliability of highly reliable products which are not likely to fail under traditional life tests. In order to conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors not only affect the experimental cost, but also affect the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are determined by minimizing the variance of the estimated 100pth percentile of the lifetime distribution of the product. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of this proposed method. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 689–706, 1999
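The decision structure can be sketched as a constrained search: enumerate candidate (sample size, inspection frequency, termination time) combinations, discard those exceeding the budget, and keep the design with the smallest variance of the estimated percentile. The cost and variance functions below are placeholders supplied by the analyst, not the paper's formulas.

```python
from itertools import product

def optimal_degradation_design(sample_sizes, frequencies, term_times,
                               cost_fn, variance_fn, budget):
    """Brute-force search over degradation-test designs within a budget (sketch).

    cost_fn(n, f, T)     : total experimental cost of the design (placeholder)
    variance_fn(n, f, T) : variance of the estimated 100p-th lifetime
                           percentile under the design (placeholder)
    Returns (best design, its variance), or (None, inf) if none is feasible.
    """
    best, best_var = None, float("inf")
    for n, f, T in product(sample_sizes, frequencies, term_times):
        if cost_fn(n, f, T) > budget:
            continue                      # design exceeds the budget
        v = variance_fn(n, f, T)
        if v < best_var:
            best, best_var = (n, f, T), v
    return best, best_var

# Toy example: cost grows with samples and measurements; variance shrinks with both.
cost = lambda n, f, T: 50 * n + 2 * n * f * T
var = lambda n, f, T: 1.0 / (n * (1 + f * T) ** 0.5)
print(optimal_degradation_design(range(5, 31, 5), [1, 2, 4], [10, 20, 40], cost, var, 3000))
```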
87.
This paper introduces a general or “distribution‐free” model to analyze the lifetime of components under accelerated life testing. Unlike the accelerated failure time (AFT) models, the proposed model shares the advantage of being “distribution‐free” with the proportional hazard (PH) model and overcomes the deficiency of the PH model not allowing survival curves corresponding to different values of a covariate to cross. In this research, we extend and modify the extended hazard regression (EHR) model using the partial likelihood function to analyze failure data with time‐dependent covariates. The new model can be easily adopted to create an accelerated life testing model with different types of stress loading. For example, stress loading in accelerated life testing can be a step function, cyclic, or linear function with time. These types of stress loadings reduce the testing time and increase the number of failures of components under test. The proposed EHR model with time‐dependent covariates, which incorporates multiple stress loadings, requires further verification. Therefore, we conduct an accelerated life test in the laboratory by subjecting components to time‐dependent stresses, and we compare the reliability estimation based on the developed model with that obtained from experimental results. The combination of the theoretical development of the accelerated life testing model verified by laboratory experiments offers a unique perspective on reliability model building and verification. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 303–321, 1999
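To make the stress-loading types concrete, the sketch below encodes step, cyclic, and linear (ramp) stress profiles as functions of time, which is the form in which a time-dependent covariate z(t) would enter a hazard regression. The specific parameter values and profile shapes are illustrative assumptions, not the paper's experimental settings.

```python
import math

def step_stress(t, change_points=(100.0, 200.0), levels=(1.0, 1.5, 2.0)):
    """Step stress: hold each level until the next change point."""
    for cp, level in zip(change_points, levels):
        if t < cp:
            return level
    return levels[-1]

def cyclic_stress(t, base=1.5, amplitude=0.5, period=50.0):
    """Cyclic stress: sinusoidal oscillation around a base level."""
    return base + amplitude * math.sin(2 * math.pi * t / period)

def ramp_stress(t, start=1.0, rate=0.01):
    """Linear (ramp) stress: increases steadily with time."""
    return start + rate * t

# The covariate value z(t) seen by the hazard model at a few time points.
for t in (0, 75, 150, 300):
    print(t, step_stress(t), round(cyclic_stress(t), 3), ramp_stress(t))
```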
88.
We consider the problem of scheduling multiprocessor tasks with prespecified processor allocations to minimize the total completion time. The complexity of both the preemptive and nonpreemptive cases of the two-processor problem is studied. We show that the preemptive case is solvable in O(n log n) time. In the nonpreemptive case, we prove that the problem is NP-hard in the strong sense, which answers an open question mentioned in Hoogeveen, van de Velde, and Veltman (1994). An efficient heuristic is also developed for this case. The relative error of this heuristic is at most 100%. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 231–242, 1998
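For orientation only (this is neither the paper's O(n log n) preemptive algorithm nor its heuristic), the sketch below evaluates a nonpreemptive list schedule for tasks whose allocation is prespecified as processor {1}, processor {2}, or both {1, 2}: tasks are processed in the given order, each starting as soon as all of its required processors are free, and the routine returns the total completion time. The task representation is an assumption.

```python
def total_completion_time(tasks):
    """Nonpreemptive list schedule for prespecified-allocation tasks on 2 processors.

    tasks : list of (processing_time, processors) in the order they are scheduled,
            where processors is one of {1}, {2}, {1, 2}.
    Returns (per-task completion times, total completion time).
    """
    free = {1: 0.0, 2: 0.0}                   # time at which each processor becomes free
    completions = []
    for p, procs in tasks:
        start = max(free[q] for q in procs)   # a biprocessor task needs both free
        finish = start + p
        for q in procs:
            free[q] = finish
        completions.append(finish)
    return completions, sum(completions)

# Example: two single-processor tasks followed by a task needing both processors.
print(total_completion_time([(3.0, {1}), (2.0, {2}), (4.0, {1, 2})]))
```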
89.
Burn-in is the preconditioning of assemblies and the accelerated power-on testing performed on equipment subject to temperature, vibration, voltage, radiation, load, corrosion, and humidity. Burn-in techniques are widely applied to integrated circuits (IC) to enhance component and system reliability. However, reliability prediction by burn-in at the component level, such as prediction based on the military standards (e.g., MIL-STD-280A, 756B, 217E [23–25]) and the industrial standards (e.g., the JEDEC standards), is usually not consistent with field observations. Here, we propose system burn-in, which can remove many of the residual defects left from component and subsystem burn-in (Chien and Kuo [6]). A nonparametric model is considered because 1) the system configuration is usually very complicated, 2) the components in the system have different failure mechanisms, and 3) there is no good model for the incompatibility among components and subsystems (Chien and Kuo [5]; Kuo [16]). Since the cost of testing a system is high and thus only small samples are available, a Bayesian nonparametric approach is proposed to determine the system burn-in time. A case study using the proposed approach on MCM ASICs shows that our model can be applied in cases where 1) the tests and the samples are expensive, and 2) records from previous generations of the product can provide information on the failure rate of the system under investigation. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 655–671, 1997
90.
In this paper we consider an inventory model in which the retailer does not know the exact distribution of demand and thus must use some observed demand data to forecast demand. We present an extension of the basic newsvendor model that allows us to quantify the value of the observed demand data and the impact of suboptimal forecasting on the expected costs at the retailer. We demonstrate the approach through an example in which the retailer employs a commonly used forecasting technique, exponential smoothing. The model is also used to quantify the value of information and information sharing for a decoupled supply chain in which both the retailer and the manufacturer must forecast demand. © 2003 Wiley Periodicals, Inc. Naval Research Logistics 50: 388–411, 2003
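A hedged sketch of the retailer's decision pipeline described here: update a demand forecast with simple exponential smoothing, then choose the order quantity at the newsvendor critical fractile around that forecast. The normal-demand assumption, the smoothing constant, and the way the forecast feeds the order decision are illustrative choices, not the paper's model.

```python
from statistics import NormalDist

def exponential_smoothing(demands, alpha=0.2, initial=None):
    """Simple exponential smoothing; returns the forecast for the next period."""
    forecast = demands[0] if initial is None else initial
    for d in demands:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast

def newsvendor_order(forecast, sigma, underage, overage):
    """Order quantity at the critical fractile, assuming demand is normal
    with mean equal to the forecast and standard deviation sigma."""
    critical_fractile = underage / (underage + overage)
    return forecast + sigma * NormalDist().inv_cdf(critical_fractile)

# Example: a short demand history, then one ordering decision.
demands = [102, 95, 110, 98, 105, 120, 99]
mu_hat = exponential_smoothing(demands, alpha=0.2)
print(newsvendor_order(mu_hat, sigma=12.0, underage=4.0, overage=1.0))
```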